NeuDA: Neural Deformable Anchor for High-Fidelity Implicit Surface Reconstruction
This paper studies implicit surface reconstruction leveraging differentiable
ray casting. Previous works such as IDR and NeuS overlook the spatial context
in 3D space when predicting and rendering the surface, and may therefore fail
to capture sharp local topologies such as small holes and structures. To
mitigate this limitation, we propose a flexible neural implicit representation leveraging
hierarchical voxel grids, namely Neural Deformable Anchor (NeuDA), for
high-fidelity surface reconstruction. NeuDA maintains the hierarchical anchor
grids where each vertex stores a 3D position (or anchor) instead of the direct
embedding (or feature). We optimize the anchor grids such that different local
geometry structures can be adaptively encoded. In addition, we investigate
frequency encoding strategies and introduce a simple hierarchical positional
encoding method for the hierarchical anchor structure to flexibly exploit the
properties of high-frequency and low-frequency geometry and appearance.
Experiments on both the DTU and BlendedMVS datasets demonstrate that NeuDA can
produce promising mesh surfaces.
Comment: Accepted to CVPR 2023, project page:
https://3d-front-future.github.io/neud
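The anchor-grid idea can be sketched in a few lines. The toy below is a single-level version under stated assumptions: the paper uses hierarchical multi-resolution grids and optimizes the anchors jointly with the surface network, whereas here the `AnchorGrid` class, the grid resolution, and the frequency count are all illustrative choices, not the authors' implementation. Each grid vertex stores a movable 3D anchor rather than a feature vector; a query point is encoded by frequency-encoding the offsets to its eight surrounding anchors and trilinearly blending the results.

```python
import numpy as np

def positional_encoding(x, num_freqs=4):
    """Standard frequency encoding: [sin(2^k * pi * x), cos(2^k * pi * x)]."""
    freqs = 2.0 ** np.arange(num_freqs) * np.pi
    angles = x[..., None] * freqs                   # (..., 3, num_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)           # (..., 3 * 2 * num_freqs)

class AnchorGrid:
    """Single-level deformable anchor grid (illustrative): each vertex stores
    a 3D anchor position, initialized at the vertex and free to move during
    optimization, instead of a direct feature embedding."""

    def __init__(self, resolution=8):
        self.res = resolution
        lin = np.linspace(0.0, 1.0, resolution + 1)
        gx, gy, gz = np.meshgrid(lin, lin, lin, indexing="ij")
        self.anchors = np.stack([gx, gy, gz], axis=-1)  # (R+1, R+1, R+1, 3)

    def encode(self, p, num_freqs=4):
        """Encode a query point p in [0,1)^3 by frequency-encoding the offsets
        to its eight surrounding anchors and trilinearly blending them."""
        cell = np.clip(np.floor(p * self.res).astype(int), 0, self.res - 1)
        t = p * self.res - cell                     # local coords in the cell
        enc = 0.0
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    a = self.anchors[cell[0] + dx, cell[1] + dy, cell[2] + dz]
                    w = ((t[0] if dx else 1 - t[0]) *
                         (t[1] if dy else 1 - t[1]) *
                         (t[2] if dz else 1 - t[2]))
                    enc = enc + w * positional_encoding(a - p, num_freqs)
        return enc
```

Because the anchors themselves are optimized, moving an anchor reshapes the encoding of every point in its cells, which is how different local geometry structures can be adaptively captured.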
Toward Understanding the Influence of Individual Clients in Federated Learning
Federated learning allows mobile clients to jointly train a global model
without sending their private data to a central server. Extensive works have
studied the performance guarantee of the global model; however, it is still
unclear how each individual client influences the collaborative training
process. In this work, we define a new notion, called Fed-Influence, to
quantify this influence over the model parameters, and propose an effective
and efficient algorithm to estimate this metric. In particular, our design
satisfies several desirable properties: (1) it requires neither retraining nor
retracing, adding only linear computational overhead to clients and the server;
(2) it strictly maintains the tenets of federated learning, without revealing
any client's local private data; and (3) it works well on both convex and
non-convex loss functions, and does not require the final model to be optimal.
Empirical results on a synthetic dataset and the FEMNIST dataset demonstrate
that our estimation method can approximate Fed-Influence with small bias.
Further, we show an application of Fed-Influence in model debugging.
Comment: Accepted at AAAI 202
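To make the notion concrete, here is a minimal retraining-free proxy for per-round client influence under FedAvg. This is a hypothetical illustration of the general idea only, not the paper's Fed-Influence estimator: the leave-one-out L2 scoring rule and both function names are assumptions. It does satisfy the first desirable property above, requiring neither retraining nor retracing.

```python
import numpy as np

def fedavg(updates, weights):
    """Weighted FedAvg aggregation of client model updates."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * u for wi, u in zip(w, updates))

def leave_one_out_influence(updates, weights):
    """For each client, the L2 distance between the aggregate computed with
    and without that client's update -- a retraining-free proxy for its
    per-round influence on the global parameters (hypothetical; not the
    paper's Fed-Influence estimator)."""
    full = fedavg(updates, weights)
    scores = []
    for k in range(len(updates)):
        rest_u = updates[:k] + updates[k + 1:]
        rest_w = weights[:k] + weights[k + 1:]
        scores.append(np.linalg.norm(full - fedavg(rest_u, rest_w)))
    return scores
```

A client whose update diverges sharply from the others moves the aggregate the most, so it receives the largest score; accumulating such scores over rounds adds only linear overhead.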
Unveiling the Siren's Song: Towards Reliable Fact-Conflicting Hallucination Detection
Large Language Models (LLMs), such as ChatGPT/GPT-4, have garnered widespread
attention owing to their myriad of practical applications, yet their adoption
has been constrained by issues of fact-conflicting hallucinations across web
platforms. The assessment of factuality in text produced by LLMs remains
inadequately explored, extending not only to the judgment of vanilla facts but
also to the evaluation of factual errors emerging in complex inferential
tasks such as multi-hop reasoning. In response, we introduce FactCHD, a
fact-conflicting hallucination detection benchmark meticulously designed for
LLMs. Functioning as a pivotal tool in evaluating factuality within
"Query-Response" contexts, our benchmark assimilates a large-scale dataset
encapsulating a broad spectrum of factuality patterns, such as vanilla,
multi-hop, comparison, and set-operation patterns. A distinctive feature of
our benchmark is its incorporation of fact-based chains of evidence, thereby
facilitating comprehensive and conducive factual reasoning throughout the
assessment process. We evaluate multiple LLMs, demonstrating the effectiveness
of the benchmark and showing that current methods fall short of faithfully
detecting factual errors. Furthermore, we present TRUTH-TRIANGULATOR, which
synthesizes reflective considerations from tool-enhanced ChatGPT and LoRA-tuned Llama2, aiming
to yield more credible detection through the amalgamation of predictive results
and evidence. The benchmark dataset and source code will be made available in
https://github.com/zjunlp/FactCHD.
Comment: Work in progress
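As a rough illustration of amalgamating predictive results with evidence, the sketch below implements a hypothetical evidence-weighted vote over detector verdicts. The actual TRUTH-TRIANGULATOR design (tool-enhanced ChatGPT combined with LoRA-tuned Llama2) is not reproduced here; the `triangulate` function and the 1.5x evidence boost are arbitrary assumptions for demonstration.

```python
def triangulate(verdicts):
    """Combine (label, confidence, evidence) verdicts from multiple detectors
    by evidence-weighted voting: each vote counts its confidence, boosted
    when it is backed by at least one piece of evidence. Hypothetical
    combiner, not the paper's TRUTH-TRIANGULATOR."""
    scores = {}
    for label, confidence, evidence in verdicts:
        boost = 1.5 if evidence else 1.0
        scores[label] = scores.get(label, 0.0) + confidence * boost
    return max(scores, key=scores.get)
```

The boost expresses the intuition in the abstract: a verdict grounded in a chain of evidence should outweigh a bare, slightly more confident guess.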
Abnormal Liver Function Tests Were Associated With Adverse Clinical Outcomes: An Observational Cohort Study of 2,912 Patients With COVID-19
Background and Aim: The impact of liver function test (LFT) abnormality on adverse clinical outcomes in coronavirus disease 2019 (COVID-19) patients remains controversial. The aim of this study was to assess the impact of abnormal LFTs on clinical outcomes in a large cohort of hospitalized patients with COVID-19.
Methods: We retrospectively collected data on 2,912 consecutive patients with COVID-19 who were admitted to a makeshift hospital in China between 5 February and 23 March 2020. The association between LFT abnormalities (baseline and peak values) and clinical outcomes was measured using Cox regression models.
Results: On admission, 1,414 patients (48.6%) had abnormal LFTs, with alanine aminotransferase (ALT), aspartate aminotransferase (AST), total bilirubin (TBIL), alkaline phosphatase (ALP), and gamma-glutamyltransferase (GGT) elevation in 662 (22.7%), 221 (7.6%), 52 (1.8%), 135 (4.6%), and 536 (18.5%) patients, respectively, and hypoalbuminemia in 737 (25.3%) patients. During a median 13 (IQR: 8–19) days of hospitalization, 61 patients (2.1%) died, 106 patients (3.6%) were admitted to the intensive care unit (ICU), and 75 patients (2.6%) required mechanical ventilation. After adjustment for confounders, baseline abnormal LFTs were independently associated with increased risks of mortality (adjusted HR 3.66, 95% CI 1.64–8.19, p = 0.002), ICU admission (adjusted HR 3.12, 95% CI 1.86–5.23, p < 0.001), and mechanical ventilation (adjusted HR 3.00, 95% CI 1.63–5.52, p < 0.001); these associations were homogeneous across the severity of COVID-19 infection. Among the LFT parameters, the associations with the outcomes were more pronounced for AST and albumin abnormality. In contrast, ALT elevation was not significantly associated with these outcomes. Similar results were observed for peak values of LFTs during hospitalization.
Conclusions: Abnormality of AST, albumin, TBIL, ALP, and GGT, but not ALT, was independently associated with adverse outcomes.
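For readers unfamiliar with the hazard ratios (HR) reported above, a single-covariate Cox model can be fit in a few lines via Newton's method on the partial log-likelihood. This is a toy on synthetic data with no adjustment for confounders; `cox_hr` and all data choices are illustrative assumptions and do not reproduce the study's adjusted multivariable analysis.

```python
import numpy as np

def cox_hr(time, event, x, iters=25):
    """Hazard ratio for a single covariate, estimated by Newton's method on
    the Cox partial log-likelihood (Breslow handling of ties). Illustrative
    sketch only; real analyses use adjusted multivariable models."""
    order = np.argsort(time)
    time, event, x = time[order], event[order], x[order]
    beta = 0.0
    for _ in range(iters):
        g, h = 0.0, 0.0
        for i in np.where(event == 1)[0]:
            risk = x[time >= time[i]]           # covariates of the risk set
            w = np.exp(beta * risk)
            m1 = (w * risk).sum() / w.sum()     # weighted mean of x
            m2 = (w * risk ** 2).sum() / w.sum()
            g += x[i] - m1                      # score contribution
            h += m2 - m1 ** 2                   # information contribution
        beta += g / h                           # Newton step
    return np.exp(beta)                         # hazard ratio = exp(beta)
```

With synthetic exponential survival times whose hazard doubles for the exposed group, the estimate recovers a hazard ratio near 2, mirroring how "HR 3.66 for mortality" summarizes a multiplicative effect on the instantaneous event rate.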
Data-Free Evaluation of User Contributions in Federated Learning
Federated learning (FL) trains a machine learning model on mobile devices in a distributed manner using each device's private data and computing resources. A critical issue is to evaluate individual users' contributions so that (1) users' effort in model training can be compensated with proper incentives and (2) malicious and low-quality users can be detected and removed. The state-of-the-art solutions require a representative test dataset for the evaluation purpose, but such a dataset is often unavailable and hard to synthesize. In this paper, we propose a method called Pairwise Correlated Agreement (PCA), based on the idea of peer prediction, to evaluate user contributions in FL without a test dataset. PCA achieves this using the statistical correlation of the model parameters uploaded by users. We then apply PCA to design (1) a new federated learning algorithm called Fed-PCA and (2) a new incentive mechanism that guarantees truthfulness. We evaluate the performance of PCA and Fed-PCA using the MNIST dataset and a large industrial product recommendation dataset. The results demonstrate that Fed-PCA outperforms the canonical FedAvg algorithm and other baseline methods in accuracy, while PCA effectively incentivizes users to behave truthfully.
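In the spirit of this approach, though not the paper's exact PCA scoring rule, a test-data-free contribution score can be sketched by correlating each client's uploaded parameter vector with the other clients'. The `contribution_scores` function and the mean-correlation rule below are simplified assumptions: honest clients trained on related data produce correlated updates, while a random or free-riding update does not.

```python
import numpy as np

def contribution_scores(updates):
    """Score each client's uploaded parameter vector by its mean Pearson
    correlation with every other client's vector -- a simplified,
    test-dataset-free proxy in the spirit of peer prediction (not the
    paper's exact PCA mechanism)."""
    corr = np.corrcoef(np.stack(updates))       # (n, n) correlation matrix
    return [np.delete(corr[i], i).mean()        # average over the peers
            for i in range(len(updates))]
```

Because the score depends only on agreement among the uploaded parameters themselves, no representative test dataset is needed, and a client submitting uncorrelated noise is immediately ranked lowest.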